
A Distribution Semantics for Probabilistic Term Rewriting

Vidal, Germán

arXiv.org Artificial Intelligence

Probabilistic programming is becoming increasingly popular thanks to its ability to specify problems with a certain degree of uncertainty. In this work, we focus on term rewriting, a well-known computational formalism. In particular, we consider systems that combine traditional rewriting rules with probabilities. Then, we define a distribution semantics for such systems that can be used to model the probability of reducing a term to some value. We also show how to compute a set of "explanations" for a given reduction, which can be used to compute its probability. Finally, we illustrate our approach with several examples and outline a couple of extensions that may prove useful to improve the expressive power of probabilistic rewrite systems.
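The distribution semantics mentioned in this abstract can be illustrated with a small sketch (not the paper's implementation): each probabilistic rule is independently included in a "world" with its attached probability, and the probability of a reduction is the total probability of the worlds in which it succeeds. The rules and probabilities below are made up for illustration.

```python
from itertools import product

# Hypothetical probabilistic rewrite system: each rule is independently
# included in a world with its probability (Sato-style distribution semantics).
prob_rules = [
    ("coin", "heads", 0.6),  # coin -> heads with probability 0.6
    ("coin", "tails", 0.5),  # coin -> tails with probability 0.5
]

def reachable(start, goal, rules):
    """Can `start` rewrite to `goal` using the given (non-probabilistic) rules?"""
    seen, frontier = set(), [start]
    while frontier:
        term = frontier.pop()
        if term == goal:
            return True
        if term in seen:
            continue
        seen.add(term)
        frontier.extend(rhs for lhs, rhs in rules if lhs == term)
    return False

def prob_of_reduction(start, goal):
    """Sum the probabilities of all worlds (rule subsets) where goal is reachable."""
    total = 0.0
    for mask in product([True, False], repeat=len(prob_rules)):
        world = [(l, r) for (l, r, _), on in zip(prob_rules, mask) if on]
        w_prob = 1.0
        for (_, _, p), on in zip(prob_rules, mask):
            w_prob *= p if on else (1 - p)
        if reachable(start, goal, world):
            total += w_prob
    return total

print(round(prob_of_reduction("coin", "heads"), 2))  # 0.6
```

Enumerating all worlds is exponential in the number of probabilistic rules; the explanations the paper computes are one way to avoid that blow-up.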


Tensor-Networks-based Learning of Probabilistic Cellular Automata Dynamics

Casagrande, Heitor P., Xing, Bo, Munro, William J., Guo, Chu, Poletti, Dario

arXiv.org Artificial Intelligence

Algorithms developed to solve many-body quantum problems, like tensor networks, can turn into powerful quantum-inspired tools to tackle problems in the classical domain. In this work, we focus on matrix product operators, a prominent numerical technique to study many-body quantum systems, especially in one dimension. It has been previously shown that such a tool can be used for classification, learning of deterministic sequence-to-sequence processes and of generic quantum processes. We further develop a matrix product operator algorithm to learn probabilistic sequence-to-sequence processes and apply this algorithm to probabilistic cellular automata. This new approach can accurately learn probabilistic cellular automata processes in different conditions, even when the process is a probabilistic mixture of different chaotic rules. In addition, we find that the ability to learn these dynamics is a function of the bit-wise difference between the rules and whether one is much more likely than the other.
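As a rough illustration of the target process (not of the tensor-network learner itself), a probabilistic cellular automaton that mixes two chaotic elementary rules can be simulated by choosing which rule to apply at random on each step; the rule numbers and mixing probability below are placeholders.

```python
import random

def eca_step(state, rule):
    """One step of an elementary cellular automaton with periodic boundary."""
    n = len(state)
    return [
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

def probabilistic_ca(state, steps, rule_a=30, rule_b=110, p=0.7, seed=0):
    """Apply rule_a with probability p, else rule_b, at every step."""
    rng = random.Random(seed)
    for _ in range(steps):
        state = eca_step(state, rule_a if rng.random() < p else rule_b)
    return state

print(eca_step([0, 0, 1, 0, 0], 30))  # [0, 1, 1, 1, 0]
```

The learner's job, per the abstract, is to recover the stochastic update map (here, the mixture of rules 30 and 110) from observed sequences.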


In Need for Both Accuracy and Interpretability? Give Probabilistic Rules a Try.

#artificialintelligence

Many algorithms are capable of underpinning decision systems. They vary in efficacy along properties such as accuracy, speed, and interpretability. To fulfill business requirements and objectives, companies are often torn over which algorithm to use. One of the most common yet thorniest issues is the tradeoff between accuracy and interpretability, especially when business goals demand both but the available methods excel in one area while underperforming in the other. Logistic regression models, for one, are highly interpretable, but not necessarily accurate.


FairDistillation: Mitigating Stereotyping in Language Models

Delobelle, Pieter, Berendt, Bettina

arXiv.org Artificial Intelligence

Large pre-trained language models are successfully being used in a variety of tasks, across many languages. With this ever-increasing usage, the risk of harmful side effects also rises, for example by reproducing and reinforcing stereotypes. However, detecting and mitigating these harms is difficult to do in general and becomes computationally expensive when tackling multiple languages or when considering different biases. To address this, we present FairDistillation: a cross-lingual method based on knowledge distillation to construct smaller language models while controlling for specific biases. We found that our distillation method does not negatively affect the downstream performance on most tasks and successfully mitigates stereotyping and representational harms. We demonstrate that FairDistillation can create fairer language models at a considerably lower cost than alternative approaches.
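The distillation component referenced here rests on a standard objective: the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that generic loss is below; FairDistillation's bias-controlling modifications to the teacher signal are not shown, and the logits are made-up values.

```python
import math

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions,
    the standard knowledge-distillation objective."""
    def softmax(logits):
        m = max(l / T for l in logits)
        exps = [math.exp(l / T - m) for l in logits]
        s = sum(exps)
        return [e / s for e in exps]
    p, q = softmax(teacher_logits), softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(distillation_loss([1.0, 2.0], [1.0, 2.0]))  # 0.0: student already matches
```

A higher temperature T flattens both distributions, so the student also learns from the teacher's relative rankings of unlikely outputs.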


Inducing Probabilistic Relational Rules from Probabilistic Examples

Raedt, Luc De (KU Leuven) | Dries, Anton (KU Leuven) | Thon, Ingo (KU Leuven) | Broeck, Guy Van den (KU Leuven) | Verbeke, Mathias (KU Leuven)

AAAI Conferences

We study the problem of inducing logic programs in a probabilistic setting, in which both the example descriptions and their classification can be probabilistic. The setting is incorporated in the probabilistic rule learner ProbFOIL+, which combines principles of the rule learner FOIL with ProbLog, a probabilistic Prolog. We illustrate the approach by applying it to the knowledge base of NELL, the Never-Ending Language Learner.
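The probabilistic setting can be sketched as follows (an illustration in the spirit of ProbFOIL-style scoring, not the authors' code): when both a candidate rule's predictions and the example labels are probabilities, the usual true/false-positive counts become expected values. The example data and the m-estimate prior are invented for illustration.

```python
# Hypothetical probabilistic examples: (rule prediction h_i, label probability p_i).
examples = [
    (1.0, 0.9),
    (0.8, 0.7),
    (0.0, 0.4),
]

def expected_counts(pairs):
    """Expected true/false positives, generalizing FOIL's crisp counts."""
    tp = sum(min(h, p) for h, p in pairs)        # overlap of prediction and label
    fp = sum(max(h - p, 0.0) for h, p in pairs)  # prediction mass beyond the label
    return tp, fp

def precision(pairs, m=1.0, prior=0.5):
    """m-estimate-smoothed precision, a common FOIL-style rule score."""
    tp, fp = expected_counts(pairs)
    return (tp + m * prior) / (tp + fp + m)

print(precision(examples))  # 0.75
```

With crisp (0/1) predictions and labels these formulas reduce to ordinary rule-learning precision, which is what makes the generalization natural.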


MESA: Maximum Entropy by Simulated Annealing

Paaß, Gerhard

arXiv.org Artificial Intelligence

Probabilistic reasoning systems combine different probabilistic rules and probabilistic facts to arrive at the desired probability values of consequences. In this paper we describe the MESA algorithm (Maximum Entropy by Simulated Annealing), which derives a joint distribution of variables or propositions. It takes into account the reliability of probability values and can resolve conflicts between contradictory statements. The joint distribution is represented in terms of marginal distributions, which makes it possible to process large inference networks and to determine the desired probability values with high precision. The procedure derives a maximum entropy distribution subject to the given constraints. It can be applied to inference networks of arbitrary topology and may be extended in a number of directions.
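The core idea, maximizing entropy subject to probability constraints via annealing, can be sketched in miniature (this toy is not the MESA algorithm; the single constraint, penalty weight, and cooling schedule are arbitrary choices). Here we anneal a joint distribution over two binary variables toward maximum entropy subject to P(A=1) = 0.7.

```python
import math, random

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def violation(p):
    # Single illustrative constraint: P(A=1) = p[0] + p[1] must equal 0.7.
    return abs((p[0] + p[1]) - 0.7)

def anneal(steps=20000, seed=1):
    rng = random.Random(seed)
    p = [0.25] * 4  # joint distribution over two binary variables
    score = entropy(p) - 100 * violation(p)  # entropy minus constraint penalty
    T = 1.0
    for _ in range(steps):
        # Perturb, then renormalize back onto the probability simplex.
        q = [max(x + rng.gauss(0, 0.01), 1e-9) for x in p]
        s = sum(q)
        q = [x / s for x in q]
        new = entropy(q) - 100 * violation(q)
        if new > score or rng.random() < math.exp((new - score) / T):
            p, score = q, new
        T = max(T * 0.999, 1e-3)  # geometric cooling with a floor
    return p
```

The analytic optimum here is [0.35, 0.35, 0.15, 0.15]: the constraint fixes P(A=1), and entropy maximization spreads mass evenly over B within each half.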


An Information Theoretic Approach to Rule-Based Connectionist Expert Systems

Goodman, Rodney M., Miller, John W., Smyth, Padhraic

Neural Information Processing Systems

We discuss in this paper architectures for executing probabilistic rule-bases in a parallel manner, using as a theoretical basis recently introduced information-theoretic models. We will begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion on the exact nature of two particular models. Finally we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems.
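An information-theoretic rule-goodness measure of the kind this line of work uses (the J-measure, associated with these authors' rule-learning framework; the probabilities below are invented) scores a rule "if Y then X" by how much the rule's firing tells us about X relative to its prior:

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of rule 'if Y then X': probability of firing times the
    relative entropy between the posterior and prior distributions of X."""
    def term(post, prior):
        return post * math.log2(post / prior) if post > 0 else 0.0
    return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

print(j_measure(0.5, 0.3, 0.3))  # 0.0: rule tells us nothing new about X
```

A rule whose posterior equals the prior scores zero, so the measure directly penalizes uninformative rules regardless of how often they fire.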

